Lipschitz Robustness of Finite-state Transducers
We investigate the problem of checking if a finite-state transducer is robust
to uncertainty in its input. Our notion of robustness is based on the analytic
notion of Lipschitz continuity --- a transducer is K-(Lipschitz) robust if the
perturbation in its output is at most K times the perturbation in its input. We
quantify input and output perturbation using similarity functions. We show that
K-robustness is undecidable even for deterministic transducers. We identify a
class of functional transducers, which admits a polynomial time
automata-theoretic decision procedure for K-robustness. This class includes
Mealy machines and functional letter-to-letter transducers. We also study
K-robustness of nondeterministic transducers. Since a nondeterministic
transducer generates a set of output words for each input word, we quantify
output perturbation using set-similarity functions. We show that K-robustness
of nondeterministic transducers is undecidable, even for letter-to-letter
transducers. We identify a class of set-similarity functions which admit
decidable K-robustness of letter-to-letter transducers.Comment: In FSTTCS 201
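The K-robustness condition can be checked empirically on individual input pairs. A minimal sketch, assuming Hamming distance as the similarity function on both input and output (the paper allows general similarity functions; the machine encoding is illustrative):

```python
# Minimal sketch: testing K-robustness of a toy Mealy machine on one pair of
# inputs. Hamming distance as the similarity function is an assumption for
# illustration; the encoding of the machine is likewise illustrative.

def run_mealy(delta, start, word):
    """Run a Mealy machine; delta maps (state, letter) -> (state, output)."""
    state, out = start, []
    for a in word:
        state, b = delta[(state, a)]
        out.append(b)
    return "".join(out)

def hamming(u, v):
    """Hamming distance between equal-length words."""
    return sum(x != y for x, y in zip(u, v))

# Toy machine over {0, 1} that copies its input (hence 1-robust).
delta = {("q", "0"): ("q", "0"), ("q", "1"): ("q", "1")}

K = 1
w, w_perturbed = "0110", "0100"     # one-letter input perturbation
d_in = hamming(w, w_perturbed)
d_out = hamming(run_mealy(delta, "q", w),
                run_mealy(delta, "q", w_perturbed))
assert d_out <= K * d_in            # consistent with K-robustness on this pair
```

A single pair can only refute K-robustness, never prove it; the paper's contribution is a decision procedure over all inputs for the identified class.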
Quantitative Automata under Probabilistic Semantics
Automata with monitor counters, where the transitions do not depend on
counter values, and nested weighted automata are two expressive
automata-theoretic frameworks for quantitative properties. For a well-studied
and wide class of quantitative functions, we establish that automata with
monitor counters and nested weighted automata are equivalent. We study for the
first time such quantitative automata under probabilistic semantics. We show
that several problems that are undecidable for the classical questions of
emptiness and universality become decidable under the probabilistic semantics.
We present a complete picture of decidability for such automata, and even an
almost-complete picture of computational complexity, for the probabilistic
questions we consider.
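To make the model concrete: in an automaton with monitor counters, a counter can be started, updated, and terminated along a run, but transitions never read counter values. A hedged sketch, where a single monitor measures each maximal block of 'a's and the run's value aggregates terminated counter values by maximum (the encoding and the choice of max-aggregation are assumptions for illustration):

```python
# Illustrative sketch of a one-monitor-counter automaton: the counter is
# started at the beginning of an 'a'-block, incremented on each 'a', and
# terminated when the block ends. Transitions never depend on the counter
# value; the run's value aggregates terminated values by max.

def run_value(word):
    best, cur, active = 0, 0, False
    for c in word + "#":          # '#' = end marker to flush an open block
        if c == "a":
            if not active:        # start the monitor counter
                active, cur = True, 0
            cur += 1              # increment; independent of the counter value
        else:
            if active:            # terminate the counter, aggregate with max
                best = max(best, cur)
                active = False
    return best

assert run_value("aabaaab") == 3  # longest 'a'-block has length 3
```

Under probabilistic semantics, one would then ask, e.g., for the expected value of `run_value` over words drawn from a Markov chain, rather than for emptiness or universality.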
Deterministic Weighted Automata under Partial Observability
Weighted automata are a basic tool for specification in quantitative
verification: they express quantitative features of analysed systems,
such as resource consumption. Quantitative specification can be assisted by
automata learning as there are classic results on Angluin-style learning of
weighted automata. The existing work assumes perfect information about the
values returned by the target weighted automaton. In assisted synthesis of a
quantitative specification, knowledge of the exact values is a strong
assumption and may be infeasible. In our work, we address this issue by
introducing a new framework of partially-observable deterministic weighted
automata, in which weighted automata return intervals containing the computed
values of words instead of the exact values. We study the basic properties of
this framework, with a particular focus on the challenges o
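The partial-observability idea can be sketched concretely: the automaton's exact value on a word is hidden, and the observer only sees an interval containing it. A minimal illustration (the toy automaton, the fixed-width buckets, and all names are assumptions, not the paper's construction):

```python
# Sketch of partial observability: a deterministic weighted automaton whose
# exact value on a word is hidden; the observer sees only an interval of
# fixed width containing it. The width-2 bucketing is an illustrative choice.

def exact_value(word):
    # Toy deterministic weighted automaton: total weight = number of 'a's.
    return sum(1 for c in word if c == "a")

def observed_interval(word, width=2):
    v = exact_value(word)
    lo = (v // width) * width         # bucket the exact value
    return (lo, lo + width)           # half-open interval containing v

lo, hi = observed_interval("abab")    # exact value 2
assert lo <= exact_value("abab") < hi
```

The learning problem then has to cope with the fact that distinct target values inside the same bucket are indistinguishable to the learner.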
Approximate Learning of Limit-Average Automata
Limit-average automata are weighted automata on infinite words that use the average to aggregate the weights seen in infinite runs. We study approximate learning problems for limit-average automata in two settings: passive and active. In the passive case, we show that limit-average automata are not PAC-learnable, as samples must be of exponential size to provide (with good probability) enough detail to learn an automaton. We also show that the problem of finding an automaton that fits a given sample is NP-complete. In the active case, we show that limit-average automata can be learned almost-exactly, i.e., we can learn in polynomial time an automaton that is consistent with the target automaton on almost all words. On the other hand, we show that the problem of learning an automaton that approximates the target automaton (with perhaps fewer states) is NP-complete. The above results are shown for the uniform distribution on words. We briefly discuss learning over different distributions.
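For intuition on the value function: on an ultimately periodic word u v^ω, the limit-average value of a deterministic weighted automaton equals the average weight along the cycle eventually induced by repeating v. A hedged sketch (the machine encoding and names are illustrative assumptions):

```python
# Sketch: limit-average value of a deterministic weighted automaton on an
# ultimately periodic word u v^omega. After the prefix u, repeating v must
# eventually revisit a state; the limit average is the mean weight over
# that cycle, since the finite prefix does not affect the limit.

def run_weights(delta, weight, start, word):
    """Return (final state, list of weights read along the run)."""
    q, ws = start, []
    for a in word:
        ws.append(weight[(q, a)])
        q = delta[(q, a)]
    return q, ws

def limit_average(delta, weight, start, u, v):
    q, _ = run_weights(delta, weight, start, u)   # consume the prefix
    seen = {q: 0}                                 # state -> iteration index
    acc = [0.0]                                   # cumulative weight sums
    while True:
        q, ws = run_weights(delta, weight, q, v)
        acc.append(acc[-1] + sum(ws))
        if q in seen:                             # cycle found: average over it
            i = seen[q]
            return (acc[-1] - acc[i]) / ((len(acc) - 1 - i) * len(v))
        seen[q] = len(acc) - 1

# One-state automaton over {a, b}: weight 1 on 'a', 0 on 'b'.
delta = {("s", "a"): "s", ("s", "b"): "s"}
weight = {("s", "a"): 1.0, ("s", "b"): 0.0}
assert limit_average(delta, weight, "s", "", "ab") == 0.5
```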
Non-deterministic Weighted Automata on Random Words
We present the first study of non-deterministic weighted automata under probabilistic semantics. In this semantics, words are random events generated by a Markov chain, and functions computed by weighted automata are random variables. We consider the probabilistic questions of computing the expected value and the cumulative distribution for such random variables.
The exact answers to the probabilistic questions for non-deterministic automata can be irrational and are uncomputable in general. To overcome this limitation, we propose an approximation algorithm for the probabilistic questions, which works in exponential time in the automaton and polynomial time in the Markov chain. We apply this result to show that non-deterministic automata can be effectively determinised with respect to the standard deviation metric.
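The expected-value question is easy to state concretely in the deterministic case, where no approximation is needed: the expectation over length-n words from a Markov chain can be computed by dynamic programming over product states. A hedged sketch under that simplifying assumption (all names and the encoding are illustrative; the paper's contribution concerns the harder non-deterministic case):

```python
# Sketch: expected total weight of a DETERMINISTIC weighted automaton on
# words of length n generated by a Markov chain, by dynamic programming over
# pairs (chain state, automaton state). The non-deterministic case, which
# the abstract addresses, requires the approximation algorithm instead.
from collections import defaultdict

def expected_value(chain, init, delta, weight, q0, n):
    """chain[s] -> [(letter, next_state, prob)]; init -> [(state, prob)]."""
    prob = defaultdict(float)   # reach probability of (s, q)
    exp = defaultdict(float)    # accumulated expected weight at (s, q)
    for s, p in init:
        prob[(s, q0)] += p
    for _ in range(n):
        nprob, nexp = defaultdict(float), defaultdict(float)
        for (s, q), pr in prob.items():
            for a, s2, p in chain[s]:
                q2 = delta[(q, a)]
                nprob[(s2, q2)] += pr * p
                nexp[(s2, q2)] += (exp[(s, q)] + pr * weight[(q, a)]) * p
        prob, exp = nprob, nexp
    return sum(exp.values())

# Uniform chain over {a, b}; automaton counts 'a's -> expectation n/2.
chain = {"s": [("a", "s", 0.5), ("b", "s", 0.5)]}
delta = {("q", "a"): "q", ("q", "b"): "q"}
weight = {("q", "a"): 1.0, ("q", "b"): 0.0}
assert abs(expected_value(chain, [("s", 1.0)], delta, weight, "q", 4) - 2.0) < 1e-9
```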
IST Austria Technical Report
As hybrid systems involve continuous behaviors, they should be evaluated by quantitative methods rather than qualitative methods. In this paper we adapt a quantitative framework, called model measuring, to the hybrid-systems domain. The model-measuring problem asks, given a model M and a specification, what is the maximal distance such that all models within that distance from M satisfy (or violate) the specification. A distance function on models is given as part of the input of the problem. Distances, especially those related to continuous behaviors, are more natural in the hybrid case than in the discrete case. We are interested in distances represented by monotonic hybrid automata, a hybrid counterpart of (discrete) weighted automata, whose recognized timed languages are monotone (w.r.t. inclusion) in the values of parameters. The contributions of this paper are twofold. First, we give sufficient conditions under which the model-measuring problem can be solved. Second, we discuss the modeling of distances and applications of the model-measuring problem.
Edit Distance for Pushdown Automata
The edit distance between two words is the minimal number of word
operations (letter insertions, deletions, and substitutions) necessary to
transform to . The edit distance generalizes to languages
, where the edit distance from to
is the minimal number such that for every word from
there exists a word in with edit distance at
most . We study the edit distance computation problem between pushdown
automata and their subclasses. The problem of computing edit distance to a
pushdown automaton is undecidable, and in practice, the interesting question is
to compute the edit distance from a pushdown automaton (the implementation, a
standard model for programs with recursion) to a regular language (the
specification). In this work, we present a complete picture of decidability and
complexity for the following problems: (1) deciding whether, for a given
threshold k, the edit distance from a pushdown automaton to a finite
automaton is at most k, and (2) deciding whether the edit distance from a
pushdown automaton to a finite automaton is finite.
Comment: An extended version of a paper accepted to ICALP 2015 with the same title. The paper has been accepted to the LMCS journal.
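The word-to-word edit distance underlying these problems (insertions, deletions, and substitutions) is the classic Levenshtein distance, computable by dynamic programming; a minimal sketch:

```python
# Levenshtein edit distance between two words, with a rolling one-row DP
# table: d[j] holds the distance from the processed prefix of u to v[:j].

def edit_distance(u, v):
    m, n = len(u), len(v)
    d = list(range(n + 1))                  # distances from the empty prefix
    for i in range(1, m + 1):
        prev, d[0] = d[0], i                # prev = old d[j-1] (diagonal)
        for j in range(1, n + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,                        # delete u[i-1]
                       d[j - 1] + 1,                    # insert v[j-1]
                       prev + (u[i - 1] != v[j - 1]))   # substitute / match
            prev = cur
    return d[n]

assert edit_distance("kitten", "sitting") == 3
```

The language-level problems studied in the paper quantify this distance over all words of a pushdown automaton, which is what makes them algorithmically non-trivial.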